
    Humans Forget, Machines Remember: Artificial Intelligence and the Right to Be Forgotten

    To understand the Right to Be Forgotten in the context of artificial intelligence, it is necessary to first review the concepts of human and AI memory and forgetting. Current law appears to treat human and machine memory alike, supporting a fictitious understanding of memory and forgetting that does not comport with reality. (Some authors have already raised concerns about the implications of perfect remembering.) This Article examines the problem of AI memory and the Right to Be Forgotten, using this example as a model for understanding the failures of current privacy law to reflect the realities of AI technology. First, this Article analyzes the legal background of the Right to Be Forgotten in order to understand its potential applicability to AI, including a discussion of the tension between the values of privacy and transparency under current E.U. privacy law. Next, the Authors explore whether the Right to Be Forgotten is practicable or beneficial in an AI/machine-learning context, in order to understand whether and how the law should address it in a post-AI world. The Authors discuss the technical problems that arise when adhering to a strict interpretation of the data deletion requirements of the Right to Be Forgotten, ultimately concluding that it may be impossible to fulfill the legal aims of the Right to Be Forgotten in artificial intelligence environments. Finally, this Article addresses the core issue at the heart of the AI and Right to Be Forgotten problem: the unfortunate dearth of interdisciplinary scholarship supporting privacy law and regulation.
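    To make the deletion problem concrete, consider a minimal sketch (not taken from the Article; the data, names, and use of scikit-learn are illustrative assumptions): a trained model's weights aggregate every training record, so deleting the raw record leaves its statistical trace behind, and the only faithful removal in this naive setting is retraining from scratch.

    # Minimal sketch: "forgetting" a record in a trained model is not like
    # deleting a database row. Hypothetical data and names throughout.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))            # 100 hypothetical user records
    y = (X[:, 0] + X[:, 1] > 0).astype(int)  # hypothetical labels

    model = LogisticRegression().fit(X, y)

    # A data subject invokes the Right to Be Forgotten for record 42.
    X_kept = np.delete(X, 42, axis=0)
    y_kept = np.delete(y, 42)

    # Erasing the stored row does not erase its influence: the fitted
    # weights still encode record 42. Faithful removal here means retraining.
    retrained = LogisticRegression().fit(X_kept, y_kept)
    print("weights trained with record 42:", model.coef_)
    print("weights retrained without it:  ", retrained.coef_)

    The sketch scales badly: for large models, retraining on every deletion request is precisely the burden that makes a strict reading of the deletion requirement so difficult to satisfy.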

    Explainable Artificial Intelligence: Concepts, Applications, Research Challenges and Visions

    The development of theory, frameworks and tools for Explainable AI (XAI) is a very active area of research these days, and articulating any coherent vision of its challenges is itself a challenge. At least two threads, sometimes complementary and sometimes colliding, have emerged. The first focuses on the development of pragmatic tools for increasing the transparency of automatically learned prediction models, such as those produced by deep or reinforcement learning. The second aims at anticipating the negative impact of opaque models, with the desire to regulate or control the consequences of incorrect predictions, especially in sensitive areas like medicine and law. The formulation of methods that augment the construction of predictive models with domain knowledge can support the production of human-understandable explanations for predictions. This runs in parallel with AI regulatory concerns, like the European Union General Data Protection Regulation, which sets standards for the production of explanations from automated or semi-automated decision making. While all this research activity reflects a growing acknowledgement that explainability is essential, it is important to recall that explainability is also among the oldest concerns of computer science. In fact, early AI was retraceable and interpretable, and thus understandable by and explainable to humans. The goal of this research is to articulate the big-picture ideas and their role in advancing the development of XAI systems, to acknowledge their historical roots, and to emphasise the biggest challenges to moving forward.
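    As a concrete illustration of the first thread (a sketch under assumed tooling, not an example from the paper): a shallow decision tree is an automatically learned prediction model whose internals can be rendered as human-readable rules, recovering something like the retraceability the authors attribute to early AI.

    # Minimal sketch of the "pragmatic transparency" thread: fit a small,
    # interpretable model and print a human-readable trace of its decisions.
    # The dataset and tooling (scikit-learn) are illustrative assumptions.
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_iris()
    tree = DecisionTreeClassifier(max_depth=2, random_state=0)
    tree.fit(data.data, data.target)

    # export_text renders the learned rules as nested if-then statements,
    # the kind of explanation early symbolic AI offered by construction.
    print(export_text(tree, feature_names=list(data.feature_names)))

    Deep models admit no such direct readout, which is why tools in this thread typically approximate them with interpretable surrogates or local explanations instead.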
